Vivienne Ming

2 items

Wall Street Journal 2026-04-26-3

AI Is Cannibalizing Human Intelligence (Vivienne Ming, WSJ)

Ming's Polymarket experiment splits human-AI usage into three measurable patterns: oracle (use the answer), validator (use AI to confirm priors), and cyborg (use AI as a sparring partner). Validators perform worse than AI alone: sycophancy laundered as evidence. Meanwhile the 5-10% of cyborgs match or beat prediction-market consensus. The unbuilt premium category is AI that disagrees with you on purpose; today's benchmarks measure what AI does alone, not whether the product is building human capacity or consuming it.

CNBC 2026-03-26-2

Vivienne Ming: Robot-Proof Children and the Nemesis Prompt

Ming's book-promo piece wraps a consensus education-reform thesis in neuroscience credibility, but the one genuinely product-ready idea is the Nemesis Prompt: the kid produces a first draft, an LLM adversarially attacks it, then the kid evaluates which critiques hold. That three-step loop is a design pattern for any AI-assisted creation tool, not just parenting advice. The real test for every AI learning product: does the user get worse when you turn it off? Most ed-tech fails that test because it optimizes for answer delivery, not capacity building. The underserved category is adversarial AI tutoring: tools that make your thinking harder, not easier. That is a harder sell to consumers, but institutional buyers running L&D programs should be asking whether their AI integration is building dependency or judgment.
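The three-step loop described above can be sketched as a generic interface. Everything here is a hypothetical illustration, not Ming's implementation: the `critic` callable stands in for an adversarially prompted LLM, and the `judge` callable stands in for the human deciding which critiques hold.

```python
# Sketch of the "Nemesis Prompt" loop: draft -> adversarial critique
# -> human triage of critiques. All names here are illustrative.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class NemesisSession:
    draft: str
    critiques: list[str] = field(default_factory=list)
    verdicts: dict[str, bool] = field(default_factory=dict)  # critique -> holds?

def run_nemesis(draft: str,
                critic: Callable[[str], list[str]],
                judge: Callable[[str, str], bool]) -> NemesisSession:
    """Step 1: take the user's draft. Step 2: have the critic attack it.
    Step 3: the judge (the human, in a real tool) rules on each critique."""
    session = NemesisSession(draft=draft)
    session.critiques = critic(draft)
    for c in session.critiques:
        session.verdicts[c] = judge(draft, c)  # human-in-the-loop step
    return session

# Stub critic: a stand-in for an LLM told to attack the draft.
def stub_critic(draft: str) -> list[str]:
    critiques = []
    if "because" not in draft:
        critiques.append("No explicit reasoning: the claim is asserted, not argued.")
    if len(draft.split()) < 20:
        critiques.append("Too thin: the draft is under 20 words.")
    return critiques

# Stub judge: in a real tool this decision belongs to the user.
def stub_judge(draft: str, critique: str) -> bool:
    return "reasoning" in critique

session = run_nemesis("AI tutors should push back on students.",
                      stub_critic, stub_judge)
```

The design point is that the human stays in the judging seat: the tool generates friction, and the user's job is evaluating which attacks survive, which is exactly the capacity most answer-delivery ed-tech never exercises.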